8 research outputs found

    HDRfeat: A Feature-Rich Network for High Dynamic Range Image Reconstruction

    A major challenge for high dynamic range (HDR) image reconstruction from multi-exposed low dynamic range (LDR) images, especially with dynamic scenes, is the extraction and merging of relevant contextual features in order to suppress ghosting and blurring artifacts from moving objects. To tackle this, we propose a novel network for HDR reconstruction with deep and rich feature extraction layers, including residual attention blocks with sequential channel and spatial attention. For compressing the rich features to the HDR domain, an architecture based on residual feature distillation blocks (RFDB) is adopted. In contrast to earlier deep-learning methods for HDR, the above contributions shift focus from merging/compression to feature extraction, the added value of which we demonstrate with ablation experiments. We present qualitative and quantitative comparisons on a public benchmark dataset, showing that our proposed method outperforms the state of the art. (Comment: 4 pages, 5 figures)
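
    Since no code is reproduced here, the following is a minimal PyTorch sketch of a residual attention block with sequential channel and spatial attention, in the spirit described above; the CBAM-style attention formulation, layer sizes, and names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a residual block with sequential channel and
# spatial attention. Channel/spatial attention follow the common CBAM-style
# formulation; all sizes are assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(x)  # reweight channels

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Pool across channels, then predict a per-pixel attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

class ResidualAttentionBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels),   # channel attention first,
            SpatialAttention(),           # then spatial attention
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection
```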

    Learning ultrasound rendering from cross-sectional model slices for simulated training

    Purpose: Given the high level of expertise required for the navigation and interpretation of ultrasound images, computational simulations can facilitate the training of such skills in virtual reality. With ray-tracing based simulations, realistic ultrasound images can be generated; however, due to computational constraints on interactivity, image quality typically needs to be compromised.

    Methods: We propose herein to bypass any rendering and simulation process at interactive time by conducting such simulations during a non-time-critical offline stage and then learning the image translation from cross-sectional model slices to such simulated frames. We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme, which both substantially improve image quality without an increase in network parameters. Integral attenuation maps derived from cross-sectional model slices, texture-friendly strided convolutions, and the provision of stochastic noise and input maps to intermediate layers in order to preserve locality are all shown herein to greatly facilitate this translation task.

    Results: Given several quality metrics, the proposed method with only tissue maps as input is shown to provide results comparable or superior to a state-of-the-art method that uses additional images of low-quality ultrasound renderings. An extensive ablation study shows the need for and the benefits of the individual contributions utilized in this work, based on qualitative examples and quantitative ultrasound similarity metrics. To that end, an error metric based on local histogram statistics is proposed and demonstrated for visualizing local dissimilarities between ultrasound images.

    Conclusion: A deep-learning based direct transformation from interactive tissue slices to the likeness of high-quality renderings allows any complex rendering process to be obviated in real time. This could enable extremely realistic ultrasound simulations on consumer hardware, by moving the time-intensive processes to a one-time, offline preprocessing data preparation stage that can be performed on dedicated high-end hardware.
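
    As an illustration of one of the inputs named above, the following is a minimal sketch of how an integral attenuation map could be accumulated along the beam direction from a cross-sectional tissue-label slice; the tissue labels, attenuation coefficients, and axis convention are assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch: an "integral attenuation map" accumulated along the
# ultrasound beam direction from a cross-sectional tissue-label slice.
# Labels, coefficients, and the beam axis are toy assumptions.
import numpy as np

def integral_attenuation_map(label_slice, coeffs, dz_cm=0.05, f_mhz=5.0):
    """label_slice: (H, W) integer tissue labels; the beam travels along axis 0.
    coeffs: dict mapping label -> attenuation in dB/(cm*MHz)."""
    alpha = np.zeros_like(label_slice, dtype=np.float32)
    for label, a in coeffs.items():
        alpha[label_slice == label] = a
    # Accumulate attenuation with depth (top of image = transducer).
    total_db = np.cumsum(alpha * dz_cm * f_mhz, axis=0)
    return 10.0 ** (-total_db / 10.0)  # remaining intensity fraction

# Example: a toy slice with background (0) and a more attenuating lesion (1).
slice_ = np.zeros((128, 128), dtype=np.int32)
slice_[40:80, 50:90] = 1
att = integral_attenuation_map(slice_, {0: 0.5, 1: 1.2})
```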

    Computational analysis of subscapularis tears and pectoralis major transfers on muscular activity

    Background: Pectoralis major transfer is the most common muscle transfer procedure to restore joint function after subscapularis tears. Limited information is available on how the neuromuscular system adjusts to the new configuration, which could explain the mixed outcomes of the procedure. The purpose of this study is to assess how muscle activation patterns change after pectoralis major transfers and to report their biomechanical implications.

    Methods: Using a computational musculoskeletal model of the shoulder, we compare how muscle activation changes with subscapularis tears and after their treatment by pectoralis major transfers of the clavicular segment, the sternal segment, or both, during three activities of daily living.

    Findings: Our results indicate that subscapularis tears require a compensatory activation of the supraspinatus and are accompanied by a reduced co-contraction of the infraspinatus, both of which can be partially recovered after transfer. Furthermore, although the pectoralis major acts asynchronously to the subscapularis before the transfer, its activation pattern changes significantly after the transfer.

    Interpretation: The capability of a transferred muscle segment to activate similarly to the intact subscapularis is found to depend on the given motion. Differences in the activation patterns between the intact subscapularis and the segments of the pectoralis major may explain the difficulty of adapting psycho-motor patterns during the rehabilitation period. Thereby, rehabilitation programs could benefit from targeted training on specific motions and from biofeedback programs. Finally, the condition of the anterior deltoid should be considered to improve joint function.
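
    To illustrate what "activating similarly" could mean quantitatively, the following is a hypothetical sketch that scores the similarity of two activation time series with a Pearson correlation; neither the metric nor the curves are taken from the study.

```python
# Illustrative sketch (not from the paper): quantifying how similarly a
# transferred pectoralis major segment activates relative to the intact
# subscapularis, via Pearson correlation of activation time series.
import numpy as np

def activation_similarity(a_intact, a_transfer):
    """Both inputs: 1-D arrays of muscle activation over one motion cycle."""
    a_intact = np.asarray(a_intact, dtype=float)
    a_transfer = np.asarray(a_transfer, dtype=float)
    return np.corrcoef(a_intact, a_transfer)[0, 1]

# Hypothetical activation curves over a normalized motion cycle.
t = np.linspace(0.0, 1.0, 101)
intact = 0.4 + 0.3 * np.sin(2 * np.pi * t)
transferred = 0.35 + 0.25 * np.sin(2 * np.pi * t + 0.4)  # phase-shifted
print(f"similarity: {activation_similarity(intact, transferred):.2f}")
```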

    Proceedings of the VISCERAL Anatomy3 organ segmentation challenge: co-located with IEEE International Symposium on Biomedical Imaging 2015

    VISCERAL (Visual Concept Extraction Challenge in Radiology) aims to organize a series of benchmarks on the processing of large-scale 3D radiology images, using an innovative cloud-based evaluation approach. While a growing number of benchmark studies compare the performance of algorithms for automated organ segmentation in images with restricted fields of view, the emphasis on anatomical segmentation in images with a wide field of view (e.g. showing the entire abdomen, the trunk, or the whole body) has been limited. The VISCERAL Anatomy benchmark series aims to address this need by providing a common image and test dataset and corresponding segmentation challenges for a wide range of anatomical structures and image modalities. These proceedings summarize the techniques submitted to the Anatomy3 benchmark, the results of which were also presented at the ISBI VISCERAL Challenge session on April 16th, 2015, as part of the IEEE International Symposium on Biomedical Imaging (ISBI) in New York, NY, USA. The challenge participants used an online evaluation system, where they submitted their algorithms in a virtual-machine environment. The organisers then ran the virtual machines on the test images and populated a participant-viewable results board with the segmentation results. Participants could then, at their discretion, publish their results to a public leaderboard. The results from the methods presented here were published on the online leaderboard two weeks before the challenge session. The short papers in these proceedings were submitted by the participants to describe the specific methodologies used to generate their results. At the session, participants had a chance to present their methods as oral presentations.

    VISCERAL (VISual Concept Extraction Challenge in Radiology): organ segmentation: overview, insights, and first results

    During clinical routine, only a small portion of the increasing amounts of medical imaging data can be assessed. This project aims to provide the necessary data for clinical image assessment within a short time and to conduct competitions for identifying successful computational strategies. The project structure, the course of the evaluation campaigns, the datasets used, and the results of the campaign participants are presented.

    Hierarchical graph representations in digital pathology

    Cancer diagnosis, prognosis, and therapy response predictions from tissue specimens highly depend on the phenotype and topological distribution of the constituting histological entities. Thus, adequate tissue representations encoding histological entities are imperative for computer-aided cancer patient care. To this end, several approaches have leveraged cell-graphs, capturing the cell microenvironment, to depict the tissue. These allow for utilizing graph theory and machine learning to map the tissue representation to tissue functionality and to quantify their relationship. Though cellular information is crucial, it alone is insufficient to comprehensively characterize complex tissue structure. We herein treat the tissue as a hierarchical composition of multiple types of histological entities, from fine to coarse levels, capturing multivariate tissue information at multiple levels. We propose a novel multi-level hierarchical entity-graph representation of tissue specimens to model the hierarchical compositions that encode histological entities as well as their intra- and inter-entity-level interactions. Subsequently, a hierarchical graph neural network is proposed to operate on the hierarchical entity graph and map the tissue structure to tissue functionality. Specifically, for input histology images, we utilize well-defined cells and tissue regions to build HierArchical Cell-to-Tissue (HACT) graph representations, and devise HACT-Net, a message-passing graph neural network, to classify the HACT representations. As part of this work, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of Haematoxylin & Eosin stained breast tumor regions of interest, to evaluate and benchmark our proposed methodology against pathologists and state-of-the-art computer-aided diagnostic approaches. Through comparative assessment and ablation studies, our proposed method is demonstrated to yield superior classification results compared to alternative methods as well as individual pathologists. The code, data, and models can be accessed at https://github.com/histocartography/hact-net.
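
    As a rough illustration of such a hierarchical entity graph, the following sketch builds a k-NN cell graph as the fine level and cell-to-tissue-region assignment edges to a coarse level; all inputs and the (x, y) coordinate convention are toy assumptions, not the HACT-Net code, which is available at the repository above.

```python
# Illustrative sketch (not the authors' code): a two-level hierarchical
# entity graph, with a k-NN cell graph at the fine level and cell-to-
# tissue-region assignment edges to the coarse level.
import numpy as np
from scipy.spatial import cKDTree

def knn_edges(points, k=5):
    """Undirected k-NN edges over cell centroids of shape (N, 2)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)  # first neighbour is the point itself
    edges = {(min(i, j), max(i, j)) for i, row in enumerate(idx) for j in row[1:]}
    return np.array(sorted(edges))

# Hypothetical inputs: cell centroids (x, y) and a tissue-region label map.
rng = np.random.default_rng(0)
cells = rng.uniform(0, 256, size=(50, 2))          # (N, 2) centroids
region_map = (rng.uniform(size=(256, 256)) > 0.5)  # toy 2-region segmentation

cell_edges = knn_edges(cells)                      # intra-level: cell graph
regions = region_map[cells[:, 1].astype(int), cells[:, 0].astype(int)]
assignment = np.stack([np.arange(len(cells)), regions.astype(int)], axis=1)
# 'assignment' holds the inter-level edges: cell i -> tissue-region node.
```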

    Overview of the VISCERAL Challenge at ISBI 2015

    This is an overview paper describing the data and evaluation scheme of the VISCERAL Segmentation Challenge at ISBI 2015. The challenge was organized on a cloud-based virtual-machine environment, where each participant could develop and submit their algorithms. The dataset contains up to 20 anatomical structures annotated in a training and a test set consisting of CT and MR images with and without contrast enhancement. The test set is not accessible to participants; instead, the organizers ran the virtual machines with the submitted segmentation methods on the test data. The results of the evaluation were then presented to the participants, who could opt to make them public on the challenge leaderboard, which displays 20 segmentation quality metrics per organ and per modality. The Dice coefficient and mean surface distance are presented herein as representative quality metrics. As a continuous evaluation platform, our segmentation challenge leaderboard will remain open beyond the duration of the VISCERAL project.
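
    For reference, the two representative quality metrics named above can be sketched as follows; this is not the challenge's evaluation code, and conventions such as surface extraction and voxel spacing are assumptions.

```python
# Illustrative sketch of the Dice coefficient and the (symmetric) mean
# surface distance between two binary segmentation masks.
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice coefficient between two boolean masks of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_surface_distance(a, b, spacing=1.0):
    """Symmetric mean distance between mask surfaces (isotropic spacing)."""
    surf = lambda m: m & ~ndimage.binary_erosion(m)
    sa, sb = surf(a.astype(bool)), surf(b.astype(bool))
    # Distance from each surface voxel to the other mask's nearest surface.
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)[sa]
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)[sb]
    return (da.sum() + db.sum()) / (len(da) + len(db))

# Toy example on two overlapping squares.
x = np.zeros((64, 64), bool); x[10:40, 10:40] = True
y = np.zeros((64, 64), bool); y[15:45, 15:45] = True
print(dice(x, y), mean_surface_distance(x, y))
```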